

Not All Data Are Unlearned Equally

Krishnan, Aravind, Reddy, Siva, Mosbach, Marius

arXiv.org Artificial Intelligence

Machine unlearning is concerned with the task of removing knowledge learned from particular data points from a trained model. In the context of large language models (LLMs), unlearning has recently received increased attention, particularly for removing knowledge about named entities from models for privacy purposes. While various approaches have been proposed to address the unlearning problem, most existing approaches treat all data points to be unlearned equally, i.e., unlearning that Montreal is a city in Canada is treated exactly the same as unlearning the phone number of the first author of this paper. In this work, we show that this "all data is equal" assumption does not hold for LLM unlearning. We study how the success of unlearning depends on the frequency of the knowledge we want to unlearn in the pre-training data of a model and find that frequency strongly affects unlearning, i.e., more frequent knowledge is harder to unlearn. Additionally, we uncover a misalignment between probability- and generation-based evaluations of unlearning and show that this problem worsens as models become larger. Overall, our experiments highlight the need for better evaluation practices and novel methods for LLM unlearning that take the training data of models into account.
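The distinction the abstract draws between probability- and generation-based evaluations can be made concrete with a short sketch. The snippet below is a minimal illustration, not the authors' code; the model name, probe fact, and decoding settings are all assumptions. It scores a fact two ways: the log-probability the model assigns to the target continuation, and whether greedy decoding still produces it.

```python
# Minimal sketch (not the authors' code) contrasting a probability-based
# and a generation-based check of whether a fact has been unlearned.
# Model name, probe sentence, and decoding length are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in; any causal LM works
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

prompt, target = "Montreal is a city in", " Canada"

# Probability-based view: log-probability the model assigns to the target.
ids = tok(prompt + target, return_tensors="pt").input_ids
with torch.no_grad():
    logits = model(ids).logits
target_len = len(tok(target).input_ids)
logprobs = torch.log_softmax(logits[0, :-1], dim=-1)
token_lp = logprobs[torch.arange(ids.shape[1] - 1), ids[0, 1:]]
target_logprob = token_lp[-target_len:].sum().item()

# Generation-based view: does greedy decoding still produce the fact?
prompt_ids = tok(prompt, return_tensors="pt").input_ids
out = model.generate(prompt_ids, max_new_tokens=5, do_sample=False)
continuation = tok.decode(out[0, prompt_ids.shape[1]:])

print(f"target log-prob: {target_logprob:.2f}")
print(f"greedy continuation: {continuation!r}")
```

The paper's point is that these two views can diverge after unlearning, so reporting only one of them can overstate how successful the removal was.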


From Bytes to Bites: Using Country Specific Machine Learning Models to Predict Famine

Kapoor, Salloni, Sayer, Simeon

arXiv.org Artificial Intelligence

Hunger crises are critical global issues affecting millions, particularly in low-income and developing countries. This research investigates how machine learning can be utilized to predict and inform decisions regarding famine and hunger crises. By leveraging a diverse set of variables (natural, economic, and conflict-related), three machine learning models (Linear Regression, XGBoost, and RandomForestRegressor) were employed to predict food consumption scores, a key indicator of household nutrition. The RandomForestRegressor emerged as the most accurate model, with an average prediction error of 10.6%, though accuracy varied significantly across countries, ranging from 2% to over 30%. Notably, economic indicators were consistently the most significant predictors of average household nutrition, while no single feature dominated across all regions, underscoring the necessity for comprehensive data collection and tailored, country-specific models. These findings highlight the potential of machine learning, particularly Random Forests, to enhance famine prediction, suggesting that continued research and improved data gathering are essential for more effective global hunger forecasting.
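As a rough illustration of the per-country setup the summary describes, the sketch below trains one RandomForestRegressor per country on tabular indicators and reports a percentage error. The CSV path, column names, and hyperparameters are illustrative assumptions, not the authors' pipeline.

```python
# A minimal sketch (assumed data layout, not the authors' pipeline) of
# country-specific random forests predicting food consumption scores.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_percentage_error
from sklearn.model_selection import train_test_split

df = pd.read_csv("household_surveys.csv")  # hypothetical file
features = ["rainfall", "food_price_index", "inflation", "conflict_events"]

# One model per country, since no single feature dominated across regions.
for country, group in df.groupby("country"):
    X_train, X_test, y_train, y_test = train_test_split(
        group[features], group["food_consumption_score"],
        test_size=0.2, random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)
    mape = mean_absolute_percentage_error(y_test, model.predict(X_test))
    print(f"{country}: {mape:.1%} average prediction error")
```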


'Staying silent? Not an option': family takes fight against deepfake nudes to Washington

The Guardian

In October last year, Francesca Mani came home from school in the suburbs of New Jersey with devastating news for her mother, Dorota. Earlier in the day, the 14-year-old had been called into the vice-principal's office and notified that she and a group of girls at Westfield High had been the victims of targeted abuse by a fellow student. Faked nude images of her and others had been circulating around school. They had been generated by artificial intelligence. Dorota had been tangentially aware of the power of this relatively new technology, but the ease with which the images were generated took her aback.


Remote Computer Vision Engineer openings near you - Updated October 08, 2022 - Remote Tech Jobs

#artificialintelligence

At the Space Dynamics Laboratory, we take pride in and highly value our employees. We are seeking mid-level computer vision engineers to work with an agile approach on the next generation of satellite ground systems supporting national defense. SDL offers competitive salaries and fantastic benefits, including:
• Flexible work schedules that fit your style: every Friday off, every other Friday off, possible work-from-home days, or simply traditional hours
• Generous paid leisure and sick leave, ensuring you never miss a special event
• A 14.2% employer retirement contribution into a 401(a) account, no matching required!
Required Qualifications:
• Bachelor's degree in computer vision, computer science, aerospace engineering, or a related discipline
• 5 years of professional experience in the design and implementation of computer vision technologies, with emphasis on support of relative navigation
• Experience with MathWorks tools, C, and Python for image processing, computer vision, and deep learning applications
• Ability to architect the framework used to develop and deploy computer vision and deep learning applications
• Experience integrating with GNC and CDH components within embedded architectures
• Experience with common software development practices, including:
  • Agile/Scrum or similar methodologies
  • Version control and continuous integration
  • Testing strategies and code testability
• Must be a U.S. citizen and able to obtain a U.S. Government Security Clearance
Let us know in your application materials if you possess the following:
• Experience designing modular software and communicating/refining designs independently or with a team through whiteboarding, diagrams, UML, etc.
• Experience with Atlassian management tools (JIRA, Confluence, Bitbucket, etc.)
• Ability to provide mentoring, leadership, and experience sharing with junior engineers
• Experience with satellite ground or flight systems
• Personal interest in space, space exploration, and space technologies
• Active Top Secret security clearance
SDL supports a variety of missions, including NASA's vision to reveal the unknown for the benefit of humankind and the Department of Defense's aim to protect our Nation on the ground, in the air, and in space. Our sensors, satellites, software systems, and science and engineering play an essential role in some important missions you've heard of, and others that you haven't. For questions or assistance with the application process or the DoD SkillBridge program, please contact employment@sdl.usu.edu.


Where Semantics and Machine Learning Converge

#artificialintelligence

Artificial Intelligence has a long history of oscillating between two somewhat contradictory poles. On one side, exemplified by Noam Chomsky, Marvin Minsky, Seymour Papert, and many others, is the idea that cognitive intelligence is algorithmic in nature: that there is a set of fundamental precepts forming the foundation of language and, by extension, intelligence. On the other side were people like Donald Hebb, Frank Rosenblatt, Wesley Clark, Henry Kelley, Arthur Bryson, Jr., and others, most not nearly as well known, who developed over time gradient descent, genetic algorithms, backpropagation, and other pieces of what would become known as neural networks. The rivalry between the two camps was fierce, and for a while, after Minsky and Papert's fairly damning analysis of Rosenblatt's Perceptron, one of the first neural models, it looked like the debate had been largely settled in favor of the algorithmic approach. In hindsight, the central obstacle both sides faced (and one that would put artificial intelligence research into a deep winter for more than a decade) was that both underestimated how much computing power would be needed for either model to actually bear fruit; it would take another fifty years, and an increase in computing power of roughly twenty-one orders of magnitude, before computers and networks reached a point where either of these technologies was feasible. As it turns out, both sides were actually right in some areas and wrong in others.


Generational Frameshifts in Technology: Computer Science and Neurosurgery, The VR Use Case

Browd, Samuel R., Sharma, Maya, Sharma, Chetan

arXiv.org Artificial Intelligence

We are at a unique moment in history where there is a confluence of technologies which will synergistically come together to transform the practice of neurosurgery. These technological transformations will be all-encompassing, including improved tools and methods for intraoperative performance of neurosurgery, scalable solutions for asynchronous neurosurgical training and simulation, as well as broad aggregation of operative data allowing fundamental changes in quality assessment, billing, outcome measures, and dissemination of surgical best practices. The ability to perform surgery more safely and more efficiently while capturing the operative details and parsing each component of the operation will open an entirely new epoch advancing our field and all surgical specialties. The digitization of all components within the operating room will allow us to leverage the various fields within computer and computational science to obtain new insights that will improve care and delivery of the highest quality neurosurgery regardless of location. The democratization of neurosurgery is at hand and will be driven by our development, extraction, and adoption of these tools of the modern world. Virtual reality provides a good example of how consumer-facing technologies are finding a clear role in industry and medicine and serves as a notable example of the confluence of various computer science technologies creating a novel paradigm for scaling human ability and interactions. The authors describe the technology ecosystem that has emerged and highlight a myriad of computational and data sciences that will be necessary to enable the operating room of the near future.


Ultrasound Scatterer Density Classification Using Convolutional Neural Networks by Exploiting Patch Statistics

Tehrani, Ali K. Z., Amiri, Mina, Rosado-Mendez, Ivan M., Hall, Timothy J., Rivaz, Hassan

arXiv.org Artificial Intelligence

Quantitative ultrasound (QUS) can reveal crucial information on tissue properties such as scatterer density. If the scatterer density per resolution cell is above or below 10, the tissue is considered fully developed speckle (FDS) or low-density scatterers (LDS), respectively. Conventionally, the scatterer density has been classified using estimated statistical parameters of the amplitude of backscattered echoes. However, if the patch size is small, the estimation is not accurate. These parameters are also highly dependent on imaging settings. In this paper, we propose a convolutional neural network (CNN) architecture for QUS, and train it using simulation data. We further improve the network performance by utilizing patch statistics as additional input channels. We evaluate the network using simulation data, experimental phantoms and in vivo data. We also compare our proposed network with different classic and deep learning models, and demonstrate its superior performance in the classification of tissues with different scatterer density values. The results also show that the proposed network is able to work with different imaging parameters with no need for a reference phantom. This work demonstrates the potential of CNNs in classifying scatterer density in ultrasound images.
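The core idea, feeding patch statistics to the CNN as extra input channels alongside the raw patch, can be sketched in a few lines. The layer sizes and the choice of statistics below (patch mean and variance broadcast to full-size channels) are illustrative assumptions, not the authors' architecture.

```python
# A minimal sketch (not the authors' architecture) of a CNN that consumes
# patch statistics as additional input channels next to the envelope patch.
import torch
import torch.nn as nn

class ScattererDensityCNN(nn.Module):
    def __init__(self, num_classes: int = 2):  # FDS vs. LDS
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, patch: torch.Tensor) -> torch.Tensor:
        # patch: (B, 1, H, W) envelope amplitude
        mean = patch.mean(dim=(2, 3), keepdim=True).expand_as(patch)
        var = patch.var(dim=(2, 3), keepdim=True).expand_as(patch)
        x = torch.cat([patch, mean, var], dim=1)  # statistics as channels
        return self.classifier(self.features(x).flatten(1))

model = ScattererDensityCNN()
logits = model(torch.rand(4, 1, 64, 64))  # four random 64x64 patches
print(logits.shape)  # torch.Size([4, 2])
```

Broadcasting the scalar statistics to full-size channels lets a standard 2D convolution consume them alongside the patch without any architectural changes.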


A Case Against Mission-Critical Applications of Machine Learning

Communications of the ACM

"How can we trust the networks?" They answered: "We know that a network is quite reliable when its inputs come from its training set. But these critical systems will have inputs corresponding to new, often unanticipated situations. There are numerous examples where a network gives poor responses for untrained inputs." David Lorge Parnas followed up on this discussion in his Letter to the Editor (Feb. We wish to point out that machine learning-based systems, including commercial ones performing safety-critical tasks, can fail not only under "unanticipated situations" (noted by Lewis and Denning) or "when it encounters data radically different from its training set" (noted by Parnas), but also under normal situations, even on data that is extremely similar to its training set. The Apollo self-driving team confirmed "it might happen" because the system was "deep learning trained." Now, after a further investigation, we have found that in 24 of these 27 failed tests, the 10 random points ...
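The failure mode described, errors on inputs extremely similar to the training set, suggests a simple robustness probe: perturb training points slightly and count prediction flips. The sketch below is a toy illustration on synthetic data; the model, noise scale, and probe count are assumptions, not the letter's actual test.

```python
# A toy probe (not the letter's experiment) for instability on inputs
# that are nearly identical to training points.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

rng = np.random.default_rng(0)
flips = 0
for _ in range(1000):
    i = rng.integers(len(X))
    x_near = X[i] + rng.normal(scale=1e-3, size=X.shape[1])  # tiny nudge
    base = model.predict(X[i].reshape(1, -1))[0]
    if model.predict(x_near.reshape(1, -1))[0] != base:
        flips += 1
print(f"{flips} of 1000 near-training-set probes changed the prediction")
```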


Are Internet-connected devices eavesdropping on our conversations?

AITopics Original Links

Like a lot of teenagers, Aanya Nigam reflexively shares her whereabouts, activities and thoughts on Twitter, Instagram and other social networks without a qualm. But Aanya's carefree attitude dissolved into paranoia a few months ago, shortly after her mother bought Amazon's Echo, a digital assistant that can be set up in a home or office to listen for various requests, such as for a song, a sports score, the weather, or even a book to be read aloud. After using the Internet-connected device for two months, Aanya, 16, started to worry that the Echo was eavesdropping on conversations in her Issaquah, Washington, living room. So she unplugged the device and hid it in a place that her mother, Anjana Agarwal, still hasn't been able to find. "I guess there is a difference between deciding to share something and having something captured by something that you don't know when it's listening," Agarwal said of her daughter's misgivings. The Echo, a $180 cylindrical device that began general shipping in July after months of public testing, is the latest advance in voice-recognition technology that's enabling machines to record snippets of conversation that are analyzed and stored by companies promising to make their customers' lives better.


Best of the web: Artificial Intelligence news for November 15, 2016

#artificialintelligence

As part of Google's slew of artificial intelligence announcements today, the company is releasing a number of AI web experiments powered by its cloud services that anyone can go and play with. One -- called Quick, Draw! -- gives you a prompt to draw an image of a written word or phrase in under 20 seconds with your mouse cursor in such a way that a neural network can identify it. It's both a hilarious and fascinating exercise with broader implications for how AI can self-learn over time… Google believes the key to growing its cloud computing business is artificial intelligence.